
    Exploiting the Path Propagation Time Differences in Multipath Transmission with FEC

    We consider a transmission of a delay-sensitive data stream from a single source to a single destination. The reliability of this transmission may suffer from bursty packet losses, the predominant type of failure in today's Internet. An effective and well-studied solution to this problem is to protect the data with a Forward Error Correction (FEC) code and send the FEC packets over multiple paths. In this paper we show that the performance of such a multipath FEC scheme can often be further improved. Our key observation is that the propagation times on the available paths often differ significantly, typically by 10-100 ms. We propose to exploit these differences by appropriate packet scheduling that we call 'Spread'. We evaluate our solution with a precise analytical formulation and trace-driven simulations. Our studies show that Spread substantially outperforms state-of-the-art solutions: it typically achieves a two- to five-fold reduction in the effective loss rate. Conversely, at the same effective loss rate, Spread significantly decreases the observed delays and helps to fight delay jitter.
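
    As a rough illustration of the scheduling idea (not the paper's exact Spread algorithm), the sketch below staggers the send times of one FEC block across paths with different propagation delays so that the packets arrive evenly spread in time; the function name, the delay values and the spreading window are invented for the example.

```python
# A rough sketch of delay-aware scheduling for one FEC block (not the exact
# Spread algorithm from the paper). Assumption: the sender knows each path's
# one-way propagation delay and wants the block's packets to *arrive* evenly
# spread over `spread_ms`, so that a loss burst hits fewer packets per block.

def schedule_fec_block(path_delays_ms, n_packets, spread_ms):
    """Return (send_offset_ms, path_index) pairs for one FEC block."""
    paths = sorted(range(len(path_delays_ms)), key=lambda i: path_delays_ms[i])
    schedule = []
    for j in range(n_packets):
        target_arrival = j * spread_ms / max(n_packets - 1, 1)
        path = paths[j % len(paths)]                 # round-robin over paths
        schedule.append((target_arrival - path_delays_ms[path], path))
    t0 = min(t for t, _ in schedule)                 # shift so sending starts at time 0
    return [(t - t0, p) for t, p in schedule]

if __name__ == "__main__":
    # Two paths whose propagation delays differ by 40 ms.
    print(schedule_fec_block([20.0, 60.0], n_packets=6, spread_ms=50.0))
```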

    2.5K-Graphs: from Sampling to Generation

    Understanding network structure and having access to realistic graphs plays a central role in computer and social networks research. In this paper, we propose a complete and practical methodology for generating graphs that resemble a real graph of interest. The metrics of the original topology that we aim to match are the joint degree distribution (JDD) and the degree-dependent average clustering coefficient ($\bar{c}(k)$). We start by developing efficient estimators for these two metrics based on a node sample collected via either independence sampling or random walks. Then, we process the output of the estimators to ensure that the target properties are realizable. Finally, we propose an efficient algorithm for generating topologies that have the exact target JDD and a $\bar{c}(k)$ close to the target. Extensive simulations using real-life graphs show that the graphs generated by our methodology are similar to the original graph with respect not only to the two target metrics, but also to a wide range of other topological metrics; furthermore, our generator is orders of magnitude faster than state-of-the-art techniques.
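
    For concreteness, here is a minimal sketch of what the two target metrics look like when computed from a uniform node sample; it is a naive plug-in estimate, not the paper's design-corrected estimators, and the graph model and sample sizes are arbitrary.

```python
# Sketch: naive plug-in estimates of the two 2.5K target metrics -- the joint
# degree distribution JDD(k, k') and the degree-dependent clustering c_bar(k)
# -- from a uniform node sample. The paper's estimators additionally correct
# for the sampling design (independence sampling or random walks); this only
# illustrates what is being estimated.
import random
from collections import defaultdict

import networkx as nx

def sample_25k_metrics(G, sample_size, seed=0):
    rng = random.Random(seed)
    nodes = rng.sample(list(G.nodes()), sample_size)
    jdd = defaultdict(int)            # counts of edges between degree pairs (k, k')
    c_by_degree = defaultdict(list)   # local clustering coefficients grouped by degree
    for u in nodes:
        k_u = G.degree(u)
        c_by_degree[k_u].append(nx.clustering(G, u))
        for v in G.neighbors(u):
            jdd[tuple(sorted((k_u, G.degree(v))))] += 1
    c_bar = {k: sum(vals) / len(vals) for k, vals in c_by_degree.items()}
    return dict(jdd), c_bar

if __name__ == "__main__":
    G = nx.powerlaw_cluster_graph(2000, 4, 0.3, seed=1)
    jdd, c_bar = sample_25k_metrics(G, sample_size=300)
    print(sorted(c_bar.items())[:5])
```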

    Towards Unbiased BFS Sampling

    Breadth First Search (BFS) is a widely used approach for sampling large unknown Internet topologies. Its main advantage over random walks and other exploration techniques is that a BFS sample is a plausible graph on its own, and therefore we can study its topological characteristics. However, it has been empirically observed that incomplete BFS is biased toward high-degree nodes, which may strongly affect the measurements. In this paper, we first analytically quantify the degree bias of BFS sampling. In particular, we calculate the node degree distribution expected to be observed by BFS as a function of the fraction $f$ of covered nodes, in a random graph $RG(p_k)$ with an arbitrary degree distribution $p_k$. We also show that, for $RG(p_k)$, all commonly used graph traversal techniques (BFS, DFS, Forest Fire, Snowball Sampling, RDS) suffer from exactly the same bias. Next, based on our theoretical analysis, we propose a practical BFS-bias correction procedure. It takes as input a collected BFS sample together with its fraction $f$. Even though $RG(p_k)$ does not capture many graph properties common in real-life graphs (such as assortativity), our $RG(p_k)$-based correction technique performs well on a broad range of Internet topologies and on two large BFS samples of the Facebook and Orkut networks. Finally, we consider and evaluate a family of alternative correction procedures, and demonstrate that, although they are unbiased for an arbitrary topology, their large variance makes them far less effective than the $RG(p_k)$-based technique.
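
    The degree bias itself is easy to reproduce empirically. The sketch below (an illustration, not the paper's analytical correction) runs an incomplete BFS on a configuration-model graph with a heavy-tailed degree sequence and compares the mean degree of the sampled nodes with the true mean degree; all parameters are arbitrary.

```python
# Empirical illustration of BFS degree bias (not the paper's correction
# procedure): sample a fraction f of a configuration-model graph RG(p_k) by
# BFS and compare the mean degree of sampled nodes with the true mean degree.
import random
from collections import deque

import networkx as nx

def bfs_sample(G, f, seed=0):
    rng = random.Random(seed)
    target = int(f * G.number_of_nodes())
    start = rng.choice(list(G.nodes()))
    seen, queue = {start}, deque([start])
    while queue and len(seen) < target:
        u = queue.popleft()
        for v in G.neighbors(u):
            if v not in seen:
                seen.add(v)
                queue.append(v)
                if len(seen) >= target:
                    break
    return seen

if __name__ == "__main__":
    # Heavy-tailed degree sequence -> pronounced bias for small f.
    rng = random.Random(2)
    degrees = [max(1, int(rng.paretovariate(2.0))) for _ in range(10000)]
    if sum(degrees) % 2:              # configuration model needs an even degree sum
        degrees[0] += 1
    G = nx.Graph(nx.configuration_model(degrees, seed=2))
    G.remove_edges_from(list(nx.selfloop_edges(G)))
    sample = bfs_sample(G, f=0.1)
    true_mean = sum(d for _, d in G.degree()) / G.number_of_nodes()
    sample_mean = sum(G.degree(v) for v in sample) / len(sample)
    print(f"true mean degree {true_mean:.2f}, BFS-sample mean degree {sample_mean:.2f}")
```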

    Robustness to failures in two-layer communication networks

    A close look at many existing systems reveals their two- or multi-layer nature, where a number of coexisting networks interact and depend on each other. For instance, in the Internet, any application-level graph (such as a peer-to-peer network) is mapped on the underlying IP network that, in turn, is mapped on a mesh of optical fibers. This layered view sheds new light on the tolerance to errors and attacks of many complex systems. What is observed at a single layer does not necessarily reflect well the state of the entire system. On the contrary, a tiny, seemingly harmless disruption of one layer may destroy a substantial or essential part of another layer, thus making the whole system useless in practice.
    In this thesis we consider such two-layer systems. We model them by two graphs at two different layers, where the upper-layer (or logical) graph is mapped onto the lower-layer (physical) graph. Our main goals are the following. First, we study the robustness to failures of existing large-scale two-layer systems. This brings us some valuable insights into the problem, e.g., by identifying common weak points in such systems. Fortunately, these two-layer problems can often be effectively alleviated by a careful system design. Therefore, our second major goal is to propose new designs that increase the robustness of two-layer systems. This thesis is organized into three main parts, where we focus on different examples and aspects of two-layer systems.
    In the first part, we turn our attention to existing large-scale two-layer systems, such as peer-to-peer networks, railway networks and the human brain. Our main goal is to study the vulnerability of these systems to random errors and targeted attacks. Our simulations show that (i) two-layer systems are much more vulnerable to errors and attacks than they appear from a single-layer perspective, and (ii) attacks are much more harmful than errors, especially when the logical topology is heterogeneous. These results hold across all studied systems.
    A natural next step consists in improving the failure robustness of two-layer systems. In particular, in the second part of this thesis, we consider IP/WDM optical networks, where an IP backbone network is mapped on a mesh of optical fibers. The problem lies in designing a survivable mapping, such that no single physical failure disconnects the logical topology. This is an NP-complete problem. We introduce a new concept of piecewise survivability, which makes the problem much easier in practice. This leads us to an efficient and scalable algorithm called SMART, which finds a survivable mapping much faster (often by orders of magnitude) than the other approaches proposed to date. Moreover, the formal analysis of SMART allows us to prove that a survivable mapping does or does not exist. Finally, this approach helps us to find vulnerable areas in the system and to reinforce them effectively, e.g., by adding new links.
    In the third part of this thesis, we shift our attention one layer higher, to the application-over-IP setting. In particular, we consider the design of Application-Level Multicast (ALM) for interactive applications, where a single source sends a delay-constrained data stream to a number of destinations. Interactive ALM should (i) respect stringent delay requirements, (ii) proactively protect the system against overlay node failures, and (iii) protect it against packet losses at the IP layer. We propose a two-layer-aware approach to this problem. First, we prove that the average packet loss rate observed at the destinations can be effectively approximated by a purely topological metric that, in turn, drops with the amount of IP-level and overlay-level path diversity available in the system. Therefore, we propose a framework that accommodates and generalizes various techniques for increasing the path diversity in the system. Within this framework, we optimize the ALM structure. As a result, we reduce the effective loss rate on real Internet topologies, typically by 30%-70%, compared to the state of the art.
    Finally, in addition to the three main parts of the thesis, we also present a set of results inspired by the study of ALM systems but not directly related to the 'two-layer' paradigm (and thus moved to the Appendix). In particular, we consider the transmission of a delay-sensitive data stream from a single source to a single destination, where the data packets are protected by a Forward Error Correction (FEC) code and sent over multiple paths. We show that the performance of such a scheme can often be further improved. Our key observation is that the propagation times on the available paths often differ significantly, typically by 10-100 ms. We propose to exploit these differences by appropriate packet scheduling, which results in a two- to five-fold reduction in the effective loss rate.
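
    To make the survivable-mapping notion from the second part concrete, the sketch below checks whether a given mapping of logical links onto physical paths survives every single physical link failure; it only verifies a mapping and is not the SMART algorithm, and the toy ring topology and mappings are invented for the example.

```python
# Sketch of the survivability check underlying the IP/WDM mapping problem: a
# mapping of logical links onto physical paths is survivable iff no single
# physical link (fiber) failure disconnects the logical topology. This only
# *verifies* a given mapping; it is not SMART, which searches for one.
import networkx as nx

def is_survivable(logical, physical, mapping):
    """mapping: logical edge (u, v), as returned by logical.edges(),
    mapped to the list of physical edges on its path."""
    for fiber in physical.edges():
        # Logical links whose physical path uses the failed fiber go down.
        survivors = [e for e in logical.edges()
                     if fiber not in mapping[e]
                     and tuple(reversed(fiber)) not in mapping[e]]
        H = nx.Graph(survivors)
        H.add_nodes_from(logical.nodes())
        if not nx.is_connected(H):
            return False
    return True

if __name__ == "__main__":
    physical = nx.cycle_graph(4)                      # ring of fibers 0-1-2-3-0
    logical = nx.Graph([(0, 1), (1, 2), (0, 2)])      # IP triangle on nodes 0, 1, 2
    good = {(0, 1): [(0, 1)], (1, 2): [(1, 2)], (0, 2): [(2, 3), (0, 3)]}
    bad = {(0, 1): [(0, 1)], (1, 2): [(1, 2)], (0, 2): [(0, 1), (1, 2)]}
    print(is_survivable(logical, physical, good))     # True: paths are disjoint enough
    print(is_survivable(logical, physical, bad))      # False: fiber (0, 1) cuts off node 0
```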

    Active Learning of Multiple Source Multiple Destination Topologies

    We consider the problem of inferring the topology of a network with $M$ sources and $N$ receivers (hereafter referred to as an $M$-by-$N$ network), by sending probes between the sources and receivers. Prior work has shown that this problem can be decomposed into two parts: first, infer smaller subnetwork components (i.e., $1$-by-$N$'s or $2$-by-$2$'s) and then merge these components to identify the $M$-by-$N$ topology. In this paper, we focus on the second part, which had previously received less attention in the literature. In particular, we assume that a $1$-by-$N$ topology is given and that all $2$-by-$2$ components can be queried and learned using end-to-end probes. The problem is which $2$-by-$2$'s to query and how to merge them with the given $1$-by-$N$, so as to exactly identify the $2$-by-$N$ topology, and optimize a number of performance metrics, including the number of queries (which directly translates into measurement bandwidth), time complexity, and memory usage. We provide a lower bound, $\lceil \frac{N}{2} \rceil$, on the number of $2$-by-$2$'s required by any active learning algorithm and propose two greedy algorithms. The first algorithm follows the framework of multiple hypothesis testing, in particular Generalized Binary Search (GBS), since our problem is one of active learning from $2$-by-$2$ queries. The second algorithm is called the Receiver Elimination Algorithm (REA) and follows a bottom-up approach: at every step, it selects two receivers, queries the corresponding $2$-by-$2$, and merges it with the given $1$-by-$N$; it requires exactly $N-1$ steps, which is much less than all $\binom{N}{2}$ possible $2$-by-$2$'s. Simulation results over synthetic and realistic topologies demonstrate that both algorithms correctly identify the $2$-by-$N$ topology and are near-optimal, but REA is more efficient in practice.
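
    The control flow of REA can be sketched as follows; the probing oracle `query_2x2`, the `merge` rule and the receiver-selection order are hypothetical stand-ins, since the abstract only specifies the overall bottom-up structure and the $N-1$ query count.

```python
# Structural sketch of the bottom-up Receiver Elimination Algorithm (REA)
# described above: at each step pick two receivers, query the corresponding
# 2-by-2 component, merge it into the current topology, and eliminate one
# receiver, so exactly N-1 queries are made. `query_2x2` and `merge` are
# hypothetical stand-ins for the probing oracle and the merge rule; only the
# control flow is illustrated here.

def rea(receivers, query_2x2, merge, topology_1xN):
    topology = topology_1xN
    remaining = list(receivers)
    while len(remaining) > 1:
        r1, r2 = remaining[0], remaining[1]      # the paper's selection rule may differ
        component = query_2x2(r1, r2)            # one end-to-end 2-by-2 measurement
        topology = merge(topology, component)
        remaining.pop(1)                         # eliminate one of the two receivers
    return topology                              # reached after exactly N-1 queries
```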

    Layered Complex Networks

    Many complex networks are only a part of larger systems, where a number of coexisting topologies interact and depend on each other. We introduce a layered model to facilitate the description and analysis of such systems. As an example of its application, we study the load distribution in three transportation systems, where the lower layer is the physical infrastructure and the upper layer represents the traffic flows. This layered view allows us to capture the fundamental differences between the real load and commonly used load estimators, which explains why these estimators fail to approximate the real load.
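
    A minimal sketch of the comparison described above, under the assumption that the 'real' load of a physical edge is the number of upper-layer flows routed over it, while single-layer edge betweenness stands in for a commonly used estimator; the graph and demands are synthetic rather than the transportation data used in the paper.

```python
# Sketch of the layered load comparison: the "real" load on a physical edge is
# the number of upper-layer (traffic) flows whose physical route crosses it,
# while edge betweenness centrality of the physical graph alone serves as a
# common single-layer estimator.
from collections import Counter

import networkx as nx

def physical_edge_load(physical, flows):
    load = Counter()
    for src, dst in flows:
        path = nx.shortest_path(physical, src, dst)
        for u, v in zip(path, path[1:]):
            load[frozenset((u, v))] += 1
    return load

if __name__ == "__main__":
    physical = nx.connected_watts_strogatz_graph(60, 4, 0.1, seed=3)
    flows = [(0, 30), (5, 42), (10, 55), (7, 19)]      # upper-layer demands
    real = physical_edge_load(physical, flows)
    estimate = nx.edge_betweenness_centrality(physical)
    busiest = max(real, key=real.get)
    top_est = max(estimate, key=estimate.get)
    print("most loaded edge (real load)  :", tuple(busiest), "carries", real[busiest], "flows")
    print("busiest edge (betweenness)    :", top_est)
```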

    Application of a hash function to discourage MAC-layer misbehaviour in wireless LANs, Journal of Telecommunications and Information Technology, 2004, no. 2

    Contention-based MAC protocols for wireless ad hoc LANs rely on random deferment of packet transmissions to avoid collisions. By selfishly modifying the probabilities of deferments, greedy stations can grab more bandwidth than regular stations that apply standard-prescribed probabilities. To discourage such misbehaviour we propose a protocol called RT-hash, whereby the winner of a contention is determined using a public hash function of the channel feedback. RT-hash is effective in a full-hearability topology, assuming that improper timing of control frames is detectable and that greedy stations do not resort to malicious actions. Simulation experiments show that RT-hash protects regular stations' bandwidth share against various sophisticated greedy strategies of deferment selection; as such, it may contribute to MAC-layer network security.
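
    The following toy example illustrates the general idea of hash-based contention resolution, not RT-hash's actual frame exchange or feedback definition: every station hashes the same public channel feedback together with each contender's identifier, so no single station can bias the outcome by tuning its own deferments.

```python
# Toy illustration of hash-based contention resolution in the spirit of
# RT-hash (the protocol's real frame exchange and feedback definition are in
# the paper): all stations observe the same public channel feedback, hash it
# with each contender's identifier, and the smallest digest wins. Because the
# feedback is not chosen by any single station, a greedy station cannot bias
# the outcome in its own favour.
import hashlib

def contention_winner(station_ids, channel_feedback: bytes):
    def digest(sid):
        return hashlib.sha256(channel_feedback + sid.encode()).hexdigest()
    return min(station_ids, key=digest)

if __name__ == "__main__":
    stations = ["sta-01", "sta-02", "sta-03"]
    print(contention_winner(stations, channel_feedback=b"round-42-feedback"))
```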

    Error and Attack Tolerance of Layered Complex Networks

    Many complex systems may be described not by one, but by a number of complex networks mapped one onto the other in a multilayer structure. Because of the interactions and dependencies between these layers, what is true for a single layer does not necessarily reflect well the state of the entire system. In this paper we study the robustness of three real-life examples of two-layer complex systems that come from the fields of communication (the Internet), transportation (the European railway system) and biology (the human brain). In order to cover the whole range of features specific to these systems, we focus on two extreme policies of the system's response to failures: no rerouting and full rerouting. Our main finding is that multilayer systems are much more vulnerable to errors and intentional attacks than they seem to be from a single-layer perspective.
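
    A simplified version of the no-rerouting experiment can be sketched as follows; the synthetic physical and logical graphs, the shortest-path mapping and the removal budget are assumptions made for illustration, not the datasets or mapping used in the paper.

```python
# Simplified sketch of the two-layer robustness experiment (no-rerouting
# policy, synthetic graphs): each logical link is mapped onto a shortest path
# in the physical graph and dies if any physical node on that path is removed.
# "Errors" remove random physical nodes, "attacks" remove the logical hubs.
import random

import networkx as nx

def surviving_fraction(physical, logical, removed):
    """Fraction of logical links still up after the given physical node
    failures, with the mapping fixed before the failures (no rerouting)."""
    removed = set(removed)
    alive = 0
    for u, v in logical.edges():
        if u in removed or v in removed:
            continue
        path = nx.shortest_path(physical, u, v)          # pre-failure mapping
        if not removed.intersection(path):
            alive += 1
    return alive / logical.number_of_edges()

if __name__ == "__main__":
    physical = nx.connected_watts_strogatz_graph(200, 4, 0.1, seed=4)   # homogeneous lower layer
    logical = nx.barabasi_albert_graph(200, 2, seed=4)                  # heterogeneous upper layer
    n_remove = 10
    errors = random.Random(4).sample(list(physical.nodes()), n_remove)
    attacks = sorted(logical.nodes(), key=logical.degree, reverse=True)[:n_remove]
    print("random errors:", surviving_fraction(physical, logical, errors))
    print("hub attacks  :", surviving_fraction(physical, logical, attacks))
```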